Qubits Can Be as Safe as Bits, Researchers Show
Introduction
Over the centuries, we have learned to put information into increasingly durable and useful forms, from stone tablets to paper to digital media. In the 1980s, researchers began theorizing about how to store information inside a quantum computer, where it is subject to all sorts of atomic-scale errors. By the 1990s they had found a few methods, but these methods fell short of their rivals from classical (regular) computers, which provided an incredible combination of reliability and efficiency.
Now, in a preprint posted on November 5, Pavel Panteleev and Gleb Kalachev of Moscow State University have shown that, at least in theory, quantum information can be protected from errors just as well as classical information can. They did it by combining two exceptionally compatible classical methods and inventing new techniques to prove their properties.
“It’s a huge achievement by Pavel and Gleb,” said Jens Eberhardt of the University of Wuppertal in Germany.
Today, quantum computers can use only around 100 qubits, the quantum equivalent of classical bits. They will need thousands or millions more to become truly useful. The new method for quantum data maintains constant performance as the number of qubits scales up, so it should help keep the size and complexity of future quantum computers to a minimum.
The authors also showed how their quantum method could play a long-sought-after role in making classical information testable for errors — at the same time that another group discovered the same capability in a classical method. “It is amazing how a problem that was open for 30 years was solved essentially at the same time by two different groups,” said Alex Lubotzky of the Weizmann Institute of Science in Israel.
Ultimately, we can never protect information perfectly from all errors. We know that we can mathematically represent classical information, such as a word or a number, as a sequence of binary digits or bits, 1s and 0s. But when we actually build these bits, in the form of electrical circuits, we find that unwanted electrical interactions — often simply called noise — cause random bits to flip to the wrong value.
In the 1940s and ’50s, Claude Shannon and Richard Hamming figured out the first solutions, discovering methods to detect and correct errors before computation began.
Hamming’s method was particularly practical. Starting with an initial sequence of bits that represented raw data (for example, the sequence 110101 might represent the number 53), he added new bits to the sequence that acted like receipts specifying how some of the initial bits should sum up. (For example, 110101 could have a digit appended to it, 0, which tells us the sum of all the other bits has an even value.) By checking the data bits against the receipt bits (in this example, making sure they really do sum to an even value), errors could be routinely detected, located and corrected.
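For readers who want the arithmetic spelled out, here is a minimal sketch in Python of the single-receipt example above (the function names are ours, not Hamming's). A lone parity bit can only detect a flipped bit; Hamming's full method uses several overlapping receipts to also locate and correct it.

```python
# A minimal sketch of the single-receipt example above. One parity bit can
# only detect an error; Hamming's full method uses several overlapping
# receipts to locate and correct it.

def add_receipt(data_bits):
    """Append a parity bit so the whole sequence sums to an even value."""
    return data_bits + [sum(data_bits) % 2]

def receipt_holds(code_bits):
    """True if the bits still sum to an even value, i.e. no error is detected."""
    return sum(code_bits) % 2 == 0

data = [1, 1, 0, 1, 0, 1]        # the sequence 110101, representing 53
encoded = add_receipt(data)      # becomes 1101010
print(receipt_holds(encoded))    # True: data and receipt agree

encoded[2] ^= 1                  # noise flips one bit
print(receipt_holds(encoded))    # False: the error is detected
```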
These methods for correcting errors are now called error-correcting codes, or simply codes. A code forges a reedy sequence of bits into an iron chain that can be repaired, at the cost of being longer and less efficient.
But for quantum computers, creating a code proved harder. Instead of bits, quantum computers use qubits that are partly 0 and partly 1 — all at once. These are susceptible to two kinds of errors, which either collapse them to a single value (either 0 or 1) or throw off the balance between them. Each kind of error can also interact with the other, making the protection of qubits much more complicated.
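In the standard notation of quantum computing (our addition, not something from the article), the two error types act on a qubit's state as follows: a bit flip swaps the two parts of the superposition, while a phase flip throws off the balance, or relative sign, between them.

```latex
% A qubit in superposition, and the two basic error types acting on it.
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle
\qquad
X|\psi\rangle = \beta|0\rangle + \alpha|1\rangle \quad \text{(bit flip)}
\qquad
Z|\psi\rangle = \alpha|0\rangle - \beta|1\rangle \quad \text{(phase flip)}
```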
“These qubits are bad. They are really, really noisy,” said Panteleev.
In 1995, Peter Shor showed that the problem was, surprisingly, simpler than it seemed. He created a quantum code from a clever combination of two classical codes, one for each type of error. In other words, he forged the quantum ore of qubits into a strong chain as well. But this first quantum code was inefficient, requiring many receipt qubits for an initial sequence.
The technology of classical codes was much more advanced by comparison, with three specific properties known to be obtainable. A code that had all three of them was called simply “good.” First, it could correct many errors (making the chain strong). Second, it required few added receipt bits (making the chain light and efficient). Third, the strength and efficiency of the chain remained constant, no matter how long a sequence of bits you started with. Shannon showed that with this third property, called constant scaling, you could always improve the ability to suppress errors simply by increasing the chain length. This remarkable finding was later reproduced in a quantum context.
After Shor’s work, researchers sought to create quantum codes with the same properties. And they succeeded, at least for those three. But there was a fourth property they needed, and no one could obtain it together with the other three. Known as low-density parity check (LDPC), it requires that each receipt sum up only a small number of bits (or qubits).
“It’s a nice thing to have for classical codes,” said Nikolas Breuckmann of University College London. “But it’s absolutely indispensable for quantum codes.”
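To see what the LDPC property means concretely, here is a toy example of ours (not the authors' construction). A code's receipts can be collected into a parity-check matrix, one row per receipt; the LDPC condition says each row touches only a few bits, so the matrix stays sparse no matter how long the code grows.

```python
# A toy illustration of the LDPC property: each receipt (row) sums only a few
# bits, so the parity-check matrix stays sparse as codes grow longer.
import numpy as np

H = np.array([            # each row checks exactly 3 of the 6 bits
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1, 1],
])

codeword = np.array([1, 0, 1, 0, 1, 0])
print((H @ codeword) % 2)   # [0 0 0]: every receipt is satisfied

codeword[1] ^= 1            # flip one bit
print((H @ codeword) % 2)   # the receipt that touches bit 1 now fails
```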
Unfortunately, Shor’s initial approach of combining classical codes broke down when researchers tried to create a good quantum LDPC code. For mathematical reasons, good classical LDPC codes were incompatible with each other and could not be combined in an optimal way. For over 20 years, no one could figure out how to obtain a quantum code that possessed both the LDPC property and constant scaling: As quantum LDPC codes increased in length, their strength degraded.
Then in 2020, several groups of researchers, including Panteleev and Kalachev, figured out radically new approaches to combining classical codes to make a quantum code. The quantum chains that they forged still became weaker with increasing length, but not as rapidly as the codes that had come before. Breuckmann and Eberhardt even created a quantum code that they conjectured would possess constant scaling, but they were unable to prove it.
In 2021, Panteleev and Kalachev built on this surge of work to create a new quantum code, which they could prove possessed the elusive combination of all four properties. The distinguishing feature of the classical codes they combined to make their quantum code is their symmetry.
The symmetry of a code can be understood by conceiving of it as a graph, a collection of vertices (dots) connected by edges (lines), which is a common perspective in the mathematics of codes. Bits of information are represented as edges of the graph, and receipts are represented as vertices, each of which sums up all the edges (bits) that touch it. From this perspective, a code whose graph is a circle can be said to have rotational symmetry, for example. Remarkably, the geometric properties of a graph can be identified with properties of its code. For example, the length of the shortest path around a torus (a doughnut-shaped surface) can be identified with the corresponding code’s strength (the number of errors it can correct).
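As a concrete toy version of this picture (our sketch, not taken from the paper), imagine bits placed on the edges of a single cycle, with a receipt at each vertex summing the two edges that touch it. Rotating the cycle maps the code onto itself, which is the rotational symmetry described above.

```python
# Bits on the edges of a cycle, receipts at its vertices: a toy code whose
# graph has rotational symmetry.
import numpy as np

n = 6
H = np.zeros((n, n), dtype=int)      # row v = the receipt at vertex v
for v in range(n):
    H[v, v] = 1                      # edge v touches vertex v...
    H[v, (v - 1) % n] = 1            # ...as does the previous edge around the cycle

all_ones = np.ones(n, dtype=int)     # set every edge-bit to 1
print((H @ all_ones) % 2)            # [0 0 0 0 0 0]: every receipt sees an even sum
```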
Panteleev and Kalachev’s quantum code is analogous to a combination, or product, of graphs, each with exceptional symmetry. The quantum code is therefore itself highly symmetric, like a torus produced from two circles. By twisting the torus in various ways, the paths around its surface can be made to keep lengthening as the number of qubits in the graph grows. Ultimately, this provides constant scaling, in addition to the other three properties.
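To give a flavor of the product idea, here is a sketch that uses the earlier hypergraph-product construction of Tillich and Zémor as a stand-in, not Panteleev and Kalachev's actual "lifted product." Two classical cycle codes combine into a quantum code whose qubits sit on a torus, with one family of receipts for each error type.

```python
# A sketch of combining two classical codes into a quantum code via the
# hypergraph product (Tillich-Zemor); two cycle codes yield the toric code.
import numpy as np

def cycle_check_matrix(n):
    """Each receipt sums two adjacent bits around a length-n cycle."""
    H = np.zeros((n, n), dtype=int)
    for v in range(n):
        H[v, v] = 1
        H[v, (v + 1) % n] = 1
    return H

def hypergraph_product(H1, H2):
    """Build the two families of quantum receipts, one per error type."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return HX, HZ

HX, HZ = hypergraph_product(cycle_check_matrix(4), cycle_check_matrix(4))

# Because the two error types interact, the two families of receipts must be
# compatible: every X receipt overlaps every Z receipt on an even number of qubits.
print(np.all((HX @ HZ.T) % 2 == 0))   # True
```

The plain product shown here is one of the constructions whose strength degrades as the code grows; the twisting in Panteleev and Kalachev's lifted product is what removes that degradation.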
The result means that quantum codes now match classical codes in their combination of properties. It also provides a means to make quantum computers more efficient, since their ability to correct errors can now (theoretically) remain constant as they are made larger.
“It brings the theoretical quality of these quantum codes to the point that has existed in classical coding for a long time,” said Naomi Nickerson of the quantum computing company PsiQuantum.
In the course of achieving their result, Panteleev and Kalachev also became aware that their quantum code could be interpreted as a classical code with a special property: If data encoded by their method is filled with a large proportion of errors, then checking almost any single receipt will reveal them. This property is called local testability, and in their code it maintains constant scaling, along with the code’s strength and efficiency, making for a new type of code that had also long evaded researchers.